Marginalized Denoising Autoencoder via Graph Regularization for Domain Adaptation
Authors
Abstract
Domain adaptation, which aims to learn domain-invariant features for sentiment classification, has received increasing attention. The underlying rationale of domain adaptation is that the involved domains share some common latent factors. Recently, neural networks based on Stacked Denoising Autoencoders (SDA) and its marginalized version (mSDA) have shown promising results in learning domain-invariant features. To explicitly preserve the intrinsic structure of the data, this paper proposes a marginalized Denoising Autoencoder via graph Regularization (GmSDA), in which the autoencoder-based framework can learn more robust features with the help of a newly incorporated graph regularization. The learned representations are fed into sentiment classifiers, and experiments show that GmSDA can effectively improve classification accuracy when compared with several state-of-the-art models on the cropped Amazon benchmark data set.
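As an illustration of the idea sketched in the abstract (not the paper's exact formulation), the code below combines an mSDA-style closed-form linear denoiser, in which feature dropout is marginalized out analytically, with a graph-Laplacian penalty on the mapped data. The function name, the placement of the graph penalty, and the hyperparameters p, lam and ridge are assumptions made for this sketch only.

import numpy as np

def graph_regularized_mda_layer(X, L, p=0.5, lam=1.0, ridge=1e-5):
    # X : d x n data matrix, L : n x n graph Laplacian of a similarity graph
    # over the n training points, p : feature corruption probability.
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])        # constant feature acts as a bias
    S = Xb @ Xb.T                               # scatter matrix of the biased data
    q = np.full((d + 1, 1), 1.0 - p)            # per-feature survival probability
    q[-1] = 1.0                                 # the bias feature is never corrupted
    Q = S * (q @ q.T)                           # E[X_tilde X_tilde^T] off-diagonal
    np.fill_diagonal(Q, q.ravel() * np.diag(S)) # diagonal uses a single survival factor
    P = S[:d, :] * q.ravel()                    # E[X X_tilde^T]
    R = Xb @ L @ Xb.T                           # graph penalty term tr(W R W^T)
    W = P @ np.linalg.inv(Q + lam * R + ridge * np.eye(d + 1))
    return W, np.tanh(W @ Xb)                   # nonlinearity applied before stacking

This is the closed-form minimizer of the expected squared reconstruction error under corruption plus lam times the Laplacian penalty on the mapped (uncorrupted) data, with a small ridge term added for numerical stability; the actual GmSDA objective and weighting may differ in detail.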
Similar resources
Unsupervised Domain Adaptation with Regularized Domain Instance Denoising
We propose to extend the marginalized denoising autoencoder (MDA) framework with a domain regularization whose aim is to denoise both the source and target data in such a way that the features become domain invariant and the adaptation gets easier. The domain regularization, based either on the maximum mean discrepancy (MMD) measure or on the domain prediction, aims to reduce the distance betwe...
A Domain Adaptation Regularization for Denoising Autoencoders
Finding domain invariant features is critical for successful domain adaptation and transfer learning. However, in the case of unsupervised adaptation, there is a significant risk of overfitting on source training data. Recently, a regularization for domain adaptation was proposed for deep models by (Ganin and Lempitsky, 2015). We build on their work by suggesting a more appropriate regularizati...
Marginalized Stacked Denoising Autoencoders
Stacked Denoising Autoencoders (SDAs) [4] have been used successfully in many learning scenarios and application domains. In short, denoising autoencoders (DAs) train one-layer neural networks to reconstruct input data from partial random corruption. The denoisers are then stacked into deep learning architectures where the weights are fine-tuned with back-propagation. Alternatively, the outputs...
An Extended Framework for Marginalized Domain Adaptation
We propose an extended framework for marginalized domain adaptation, aimed at addressing unsupervised, supervised and semisupervised scenarios. We argue that the denoising principle should be extended to explicitly promote domain-invariant features as well as help the classification task. Therefore we propose to jointly learn the data auto-encoders and the target classifiers. First, in order to...
Marginalizing stacked linear denoising autoencoders
Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. They have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we prop...
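The mSDA snippets above describe stacking such one-layer denoisers into a deep representation whose concatenated features are then handed to a standard classifier. The short sketch below illustrates that stacking step, reusing the graph_regularized_mda_layer function sketched after the abstract; the layer count, corruption level and penalty weight are illustrative choices, not values from any of the papers.

import numpy as np
# assumes graph_regularized_mda_layer from the sketch after the abstract

def stack_denoising_layers(X, L, n_layers=3, p=0.5, lam=0.1):
    # Stack marginalized denoising layers and concatenate every hidden
    # representation with the raw input, as in mSDA-style pipelines; the
    # result is typically fed to an off-the-shelf classifier such as a linear SVM.
    feats = [X]
    H = X
    for _ in range(n_layers):
        _, H = graph_regularized_mda_layer(H, L, p=p, lam=lam)
        feats.append(H)
    return np.vstack(feats)  # ((n_layers + 1) * d) x n feature matrix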
Publication date: 2013